108 research outputs found

    Humanized Recommender Systems: State-of-the-art and Research Issues

    Considerations for applying logical reasoning to explain neural network outputs

    We discuss the impact of presenting explanations to people for Artificial Intelligence (AI) decisions powered by Neural Networks, according to three types of logical reasoning (inductive, deductive, and abductive). We start from examples in the existing literature on explaining artificial neural networks. We see that abductive reasoning is (unintentionally) the type most commonly used by default in user testing when comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenge of generating the explanations against the effectiveness of the explanations. Also, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types, we identify considerations needed to support these kinds of transformations.

    Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations

    A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI: rejecting advice when it is incorrect, and accepting advice when it is correct. Previous findings suggest that explanations can cause over-reliance on the AI (overly accepting advice). Explanations that evoke appropriate trust are even more challenging to achieve for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users' reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI's prediction. In contrast to previous work, we look at interactions between different aspects of decision-making, including AI correctness, and at the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve the users' task performance in the case of high AI confidence, compared to inductive explanations. In other words, these explanation styles were able to invoke correct decisions (for both positive and negative decisions) when the system was certain. In this condition, the agreement between the users' decisions and the AI prediction confirms this finding, highlighting a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI. Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.

    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill gaps in their own abilities, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with additional system information, through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge of ML or are given information indicating the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as ML knowledge for decision-makers working with ML recommendations.

    Providing awareness, explanation and control of personalized filtering in a social networking site

    Social networking sites (SNSs) have applied personalized filtering to deal with overwhelmingly irrelevant social data. However, due to the focus on accuracy, personalized filtering often leads to the "filter bubble" problem, where users only receive information that matches their pre-stated preferences and fail to be exposed to new topics. Moreover, these SNSs are black boxes, providing the user with no transparency about how the filtering mechanism decides what is shown in the activity stream. As a result, the user's usage experience and trust in the system can decline. This paper presents an interactive method to visualize the personalized filtering in SNSs. The proposed visualization helps to create awareness, explanation, and control of personalized filtering in order to alleviate the "filter bubble" problem and increase the users' trust in the system. Three user evaluations are presented. The results show that users have a good understanding of the filter bubble visualization, and that the visualization can increase users' awareness of the filter bubble, their understanding of the filtering mechanism, and their feeling of control over the data stream they are seeing. The intuitiveness of the design is good overall, but context-sensitive help is also preferred. Moreover, the visualization can provide users with a better usage experience and increase users' trust in the system.

    Motivated numeracy and active reasoning in a Western European sample

    Recent work by Kahan et al. (2017) on the psychology of motivated numeracy in the context of intracultural disagreement suggests that people are less likely to employ their numerical capabilities when the evidence runs contrary to their political ideology. This research has so far been carried out primarily in the USA, regarding the liberal-conservative divide over gun control regulation. In this paper, we present the results of a modified replication with Western European participants that included an active reasoning intervention, regarding both the hierarchy-egalitarianism and individualism-collectivism divides over immigration policy (n = 746; considerably fewer than the preregistered sample size). We reproduce the motivated numeracy effect, though we do not find evidence of increased polarization among high-numeracy participants.